Adversarial Training Methods for Deep Learning: A Systematic Review

Authors

Abstract

Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms. Adversarial training is one of the methods used to defend against the threat of such attacks. It is a training schema that utilizes an alternative objective function to provide model generalization for both adversarial data and clean data. In this systematic review, we focus particularly on adversarial training as a method of improving the defensive capacities and robustness of machine learning models. Specifically, we examine adversarial sample accessibility through adversarial sample generation methods. The purpose of this review is to survey state-of-the-art adversarial training and robust optimization methods and to identify the research gaps within this field of applications. The literature search was conducted using Engineering Village (an engineering search tool, which provides access to 14 engineering and patent databases), where we collected 238 related papers. The papers were filtered according to defined inclusion and exclusion criteria, and information was extracted from them using a defined strategy. A total of 78 papers published between 2016 and 2021 were selected. The extracted data were categorized by defense strategy, and bar plots and comparison tables are used to show their distribution. The findings indicate that there are still limitations in adversarial training and robust optimization, the most common problem being overfitting.
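To make the alternative objective concrete, the following is a minimal sketch of FGSM-based adversarial training in PyTorch, assuming a mixed loss over clean and adversarial examples; the model, the epsilon value, and the 50/50 mixing weight alpha are illustrative choices rather than settings taken from the reviewed papers.

```python
# Minimal sketch of FGSM-based adversarial training (illustrative, not the
# exact setup of any paper in the review): the training loss is a weighted
# sum of the loss on clean inputs and on FGSM-perturbed inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255, alpha=0.5):
    """One step of adversarial training: mix clean loss and adversarial loss."""
    model.train()
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = alpha * F.cross_entropy(model(x), y) + (1 - alpha) * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy usage with random data and a small classifier, just to show the call pattern.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(16, 1, 28, 28)  # images scaled to [0, 1]
    y = torch.randint(0, 10, (16,))
    print(adversarial_training_step(model, optimizer, x, y))
```

A PGD-style variant would replace the single FGSM step inside fgsm_example with several smaller gradient steps, projecting back into the epsilon ball after each step.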

Similar Articles

Deep learning-based CAD systems for mammography: A review article

Breast cancer is one of the most common types of cancer in women. Screening mammography is a low‑dose X‑ray examination of breasts, which is conducted to detect breast cancer at early stages when the cancerous tumor is too small to be felt as a lump. Screening mammography is conducted for women with no symptoms of breast cancer, for early detection of cancer when the cancer is most treatable an...

Deep Learning Methods for Classification with Limited Training Data

The human brain has an inherent ability to learn to react to something with just one past experience. The quest for Artificial Intelligence has brought us to the situation where machines simulating the abilities of the human brain are being developed. In this context, a new flavour of the evergreen classification problem, that is, classifying data having seen only a few training instances, becomes ...

Adversarial Active Learning for Deep Networks: a Margin Based Approach

We propose a new active learning strategy designed for deep neural networks. The goal is to minimize the number of data annotations queried from an oracle during training. Previous active learning strategies scalable for deep networks were mostly based on uncertain sample selection. In this work, we focus on examples lying close to the decision boundary. Based on theoretical works on margin theo...
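As a rough illustration of selecting examples near the decision boundary, the sketch below ranks an unlabeled pool by the gap between the top two predicted logits; this softmax-margin proxy, the select_by_margin helper, the toy model, and the query budget are all illustrative assumptions, not the margin computation defined in that paper's full text.

```python
# Hedged proxy for "distance to the decision boundary": a small gap between the
# two largest logits suggests the sample sits near a class boundary.
import torch
import torch.nn as nn

def select_by_margin(model, unlabeled_x, budget=10):
    """Return indices of the `budget` samples with the smallest top-2 logit margin."""
    model.eval()
    with torch.no_grad():
        scores = model(unlabeled_x)              # (N, num_classes) logits
        top2 = scores.topk(2, dim=1).values      # two largest logits per sample
        margin = top2[:, 0] - top2[:, 1]         # small margin ~ near the boundary
    return margin.argsort()[:budget]             # query the most ambiguous samples

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    pool = torch.rand(100, 1, 28, 28)            # toy unlabeled pool
    print(select_by_margin(model, pool, budget=5))
```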

Robust Deep Reinforcement Learning with Adversarial Attacks

This paper proposes adversarial attacks for Reinforcement Learning (RL) and then improves the robustness of Deep Reinforcement Learning (DRL) algorithms to parameter uncertainties with the help of these attacks. We show that even a naively engineered attack successfully degrades the performance of a DRL algorithm. We further improve the attack using gradient information of an engineered loss func...
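A hedged sketch of what a naive gradient-based attack on a DRL policy could look like: an FGSM-style perturbation of the observation that lowers the policy's confidence in the action it would otherwise take. The policy network, the attack_observation helper, and epsilon are illustrative assumptions, not the paper's exact engineered loss.

```python
# Illustrative observation attack on a discrete-action policy (assumed setup):
# increase the cross-entropy loss toward the clean policy's preferred action.
import torch
import torch.nn as nn
import torch.nn.functional as F

def attack_observation(policy, obs, epsilon=0.05):
    """Return a perturbed observation that pushes the policy away from its preferred action."""
    obs_adv = obs.clone().detach().requires_grad_(True)
    logits = policy(obs_adv)
    target_action = logits.argmax(dim=1).detach()     # action the clean policy prefers
    loss = F.cross_entropy(logits, target_action)     # confidence in that action
    grad = torch.autograd.grad(loss, obs_adv)[0]
    return (obs_adv + epsilon * grad.sign()).detach() # ascend the loss => degrade the policy

if __name__ == "__main__":
    policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # CartPole-sized toy policy
    obs = torch.rand(1, 4)
    print(attack_observation(policy, obs))
```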

Gossip training for deep learning

We address the issue of speeding up the training of convolutional networks. Here we study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying individual gradient descents on a local variable. We propose a new way to share information between different threads inspired by gossip algorithms and showing good consens...
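A single-process simulation of the gossip idea (an assumption made for brevity; the paper runs actual parallel threads): each worker keeps a local model copy, takes plain SGD steps, and occasionally averages parameters with one randomly chosen peer. The worker count, gossip interval, and toy data below are illustrative.

```python
# Simplified gossip-SGD simulation: local SGD per worker plus pairwise parameter averaging.
import copy
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_sgd_step(model, x, y, lr=0.1):
    """One plain SGD step on a worker's local model copy."""
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
            p.grad = None

def gossip_average(model_a, model_b):
    """Pairwise gossip: both workers move to the average of their parameters."""
    with torch.no_grad():
        for pa, pb in zip(model_a.parameters(), model_b.parameters()):
            avg = (pa + pb) / 2
            pa.copy_(avg)
            pb.copy_(avg)

if __name__ == "__main__":
    base = nn.Linear(20, 2)
    workers = [copy.deepcopy(base) for _ in range(4)]   # stand-ins for parallel threads
    for step in range(100):
        for w in workers:                               # each worker takes a local step
            x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))
            local_sgd_step(w, x, y)
        if step % 5 == 0:                               # occasional pairwise gossip exchange
            a, b = random.sample(workers, 2)
            gossip_average(a, b)
```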

Journal

Journal title: Algorithms

Year: 2022

ISSN: 1999-4893

DOI: https://doi.org/10.3390/a15080283